42 research outputs found

    Analysis of Error Control and Congestion Control Protocols

    This thesis presents an analysis of a class of error control and congestion control protocols used in computer networks. We address two kinds of packet errors: (a) independent errors and (b) congestion-dependent errors. Our performance measure is the expected time and the standard deviation of the time to transmit a large message consisting of N packets. The analysis of error control protocols assuming independent packet errors gives insight into how the error control protocols should really work if buffer overflows are minimal. Some pertinent results on the performance of go-back-n, selective repeat, blast with full retransmission on error (BFRE) and a variant of BFRE, the Optimal BFRE that we propose, are obtained. We then analyze error control protocols in the presence of congestion-dependent errors. We study the selective repeat and go-back-n protocols and find that, irrespective of retransmission strategy, the expected time as well as the standard deviation of the time to transmit N packets increases sharply in the face of heavy congestion. However, if the congestion level is low, the two retransmission strategies perform similarly. We conclude that congestion control is a far more important issue when errors are caused by congestion. We next study the performance of a queue with dynamically changing input rates that are based on implicit or explicit feedback. This is motivated by recent proposals for adaptive congestion control algorithms where the sender's window size is adjusted based on the perceived congestion level of a bottleneck node. We develop a Fokker-Planck approximation for a simplified system; yet it is powerful enough to answer the important questions regarding stability, convergence (or oscillations), fairness and the significant effect that delayed feedback has on performance. Specifically, we find that, in the absence of feedback delay, a linear increase/exponential decrease rate control algorithm is provably stable and fair. Delayed feedback, however, introduces cyclic behavior. This last result not only concurs with some recent simulation studies but also expounds quantitatively on the real causes behind them.
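The linear increase/exponential decrease discipline described above can be illustrated with a toy simulation. This is only a sketch, not the thesis's Fokker-Planck model: the capacity, step sizes and binary shared-congestion feedback rule below are all hypothetical choices.

```python
# Toy sketch of linear-increase/exponential-decrease rate control with
# instantaneous (undelayed) binary feedback. All constants are
# illustrative, not taken from the thesis.

def simulate(rates, capacity=1.0, steps=5000, incr=0.01, decr=0.5):
    """Advance all senders' rates; one shared congestion signal per step."""
    for _ in range(steps):
        congested = sum(rates) > capacity   # implicit feedback from queue
        if congested:
            rates = [r * decr for r in rates]   # exponential decrease
        else:
            rates = [r + incr for r in rates]   # linear increase
    return rates

# Two senders that start far apart end up with nearly equal rates,
# consistent with the stability/fairness claim for undelayed feedback.
final = simulate([0.9, 0.1])
```

With feedback delayed by even a few steps, the same loop develops the cyclic behavior the abstract describes.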

    On the Dynamics and Significance of Low Frequency Components of Internet Load

    Dynamics of Internet load are investigated using statistics of round-trip delays, packet losses and out-of-order sequences of acknowledgments. Several segments of the Internet are studied. They include a regional network (the John von Neumann Center Network), a segment of the NSFNet backbone and a cross-country network consisting of regional and backbone segments. Issues addressed include: (a) dominant time scales in network workload; (b) the relationship between packet loss and different statistics of round-trip delay (average, minimum, maximum and standard deviation); (c) the relationship between out-of-sequence acknowledgments and different statistics of delay; (d) the distribution of delay; (e) a comparison of results across different network segments (regional, backbone and cross-country); and (f) a comparison of results across time for a specific network segment. This study attempts to characterize the dynamics of Internet workload from an end-point perspective. A key conclusion from the data is that efficient congestion control is still a very difficult problem in large internetworks. Nevertheless, there are interesting signals of congestion that may be inferred from the data. Examples include (a) the presence of slow oscillation components in smoothed network delay, (b) an increase in conditional expected loss and conditional out-of-sequence acknowledgments as a function of various statistics of delay, and (c) a change in delay distribution parameters as a function of load, while the distribution itself remains the same. The results have potential application in heuristic algorithms and analytical approximations for congestion control.
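The conditional-loss signal in item (b) can be sketched with a small helper: given paired (delay, lost) observations, estimate P(loss | delay bin). The data and bin edges below are synthetic; only the computation is meant to mirror the kind of end-point analysis described.

```python
# Estimate conditional loss frequency per delay bin from paired samples.
# The samples here are synthetic; real inputs would be per-packet RTTs
# and a loss indicator gathered at the end point.

def conditional_loss(samples, edges):
    """samples: list of (rtt, lost); edges: ascending bin boundaries."""
    counts = [0] * (len(edges) - 1)
    losses = [0] * (len(edges) - 1)
    for rtt, lost in samples:
        for i in range(len(edges) - 1):
            if edges[i] <= rtt < edges[i + 1]:
                counts[i] += 1
                losses[i] += int(lost)
                break
    return [l / c if c else None for l, c in zip(losses, counts)]

data = [(50, 0), (60, 0), (80, 0), (120, 1), (150, 1), (140, 0)]
rates = conditional_loss(data, edges=[0, 100, 200])
# In this toy data the higher-delay bin shows higher conditional loss,
# the shape of signal the abstract reports from real measurements.
```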

    New Algorithms for Capacity Allocation and Scheduling of Multiplexed Variable Bit Rate Video Sources

    This study presents simple and accurate heuristics for determining the equivalent bandwidth for multiplexed variable bit rate (VBR) video sources. The results are based on empirical studies of measurement data of various classes of VBR video sources. They are also validated through extensive simulation. The principal result is that the equivalent bandwidth per source for n independent and identically distributed VBR video sources may be approximated by a hyperbolic function of the form a·coth⁻¹(n) + b, where a and b are independent of n. Further, assuming ε is the acceptable loss tolerance, statistical regression shows that b is a linear function of the mean and of log(ε), while a is a polynomial in log(ε). The capacity assignment problem is further augmented with a scheduling algorithm that is an extension of the Virtual Clock algorithm. The new algorithm belongs to a class of algorithms which we refer to as Generalized Virtual Clock (GVC) algorithms. The particular GVC algorithm investigated in this paper estimates the instantaneous rate of transmission of each source and uses the estimate, instead of the static average rates, for prioritizing packets. In so doing, it attempts to synchronize the switch scheduling rates and the packet arrival rates of each source, and improves upon the spatial loss distribution characteristics of Virtual Clock. The combined allocation and scheduling algorithms are proposed as a means for guaranteeing Quality of Service in high-speed networks.
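The hyperbolic form is easy to evaluate: coth⁻¹(n) = ½·ln((n+1)/(n−1)) for n > 1, so the per-source bandwidth falls toward b as more sources are multiplexed. The constants a and b below are hypothetical placeholders; the paper obtains them by regression on log(ε).

```python
import math

def per_source_bandwidth(n, a, b):
    """Equivalent bandwidth per source for n multiplexed i.i.d. VBR
    sources, using the reported form a*arccoth(n) + b (valid for n > 1).
    a and b are placeholders, not the paper's fitted values."""
    arccoth = 0.5 * math.log((n + 1) / (n - 1))
    return a * arccoth + b

# The multiplexing gain shows up as a monotone decrease toward the
# asymptote b as n grows.
bw = [per_source_bandwidth(n, a=2.0, b=0.5) for n in (2, 10, 100)]
```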

    Analysis of Dynamic Congestion Control Protocols: A Fokker-Planck Approximation

    We present an approximate analysis of a queue with dynamically changing input rates that are based on implicit or explicit feedback. This is motivated by recent proposals for adaptive congestion control algorithms [RaJa 88, Jac 88], where the sender's window size at the transport level is adjusted based on the perceived congestion level of a bottleneck node. We develop an analysis methodology for a simplified system; yet it is powerful enough to answer the important questions regarding stability, convergence (or oscillations), fairness and the significant effect that delayed feedback has on performance. Specifically, we find that, in the absence of feedback delay, the linear increase/exponential decrease algorithm of Jacobson and Ramakrishnan-Jain [Jac 88, RaJa 88] is provably stable and fair. Delayed feedback, on the other hand, introduces oscillations for every individual user as well as unfairness across those competing for the same resource. While the simulation study of Zhang [Zha 89] and the fluid-approximation study of Bolot and Shankar [BoSh 90] have observed the oscillations in cumulative queue length, and measurements by Jacobson [Jac 88] have revealed some of the unfairness properties, the reasons for these have not been identified. We identify quantitatively the cause of these effects, vis-à-vis the system parameters and properties of the algorithm used. The model presented is fairly general and can be applied to evaluate the performance of a wide range of feedback control schemes. It is an extension of the classical Fokker-Planck equation. Therefore, it addresses traffic variability (to some extent) that fluid approximation techniques do not address.
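For reference, the classical one-dimensional Fokker-Planck equation that the paper's model extends describes the evolution of the probability density p(x, t) of the state (here, a congestion measure such as window size or queue length), with drift A(x) and diffusion B(x):

```latex
\frac{\partial p(x,t)}{\partial t}
  = -\frac{\partial}{\partial x}\bigl[A(x)\,p(x,t)\bigr]
  + \frac{1}{2}\,\frac{\partial^2}{\partial x^2}\bigl[B(x)\,p(x,t)\bigr]
```

The coefficients shown are the generic ones; the paper's specific drift and diffusion terms come from the rate-control algorithm and are not reproduced in this abstract.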

    A Proof of Quasi-Independence Of Sliding Window Flow Control and Go-Back-N Error Recovery Under Independent Packet Errors

    A quasi-independence result holds for the go-back-n automatic repeat request (ARQ) protocol and the sliding window flow control protocol if packet errors are independent. The result is independent of the magnitude of the packet error probability or the cost of an error. A parallel result for the selective repeat ARQ protocol, however, does not appear to hold. Keywords: Error Recovery Protocols, ARQ, Go-back-N, Window Flow Control. College of Computing, Georgia Institute of Technology, Atlanta, Georgia 30332-0280. To be published in Computer Networks and ISDN Systems. This research was supported in part by the National Science Foundation under grant NCR91-16117. Parts of this paper were presented as evidence of quasi-independence in Sigmetrics 1990 [7]. The proof of quasi-independence (Section 4) is new. 1 Introduction. The objective of this paper is to show that under certain conditions, the performance of the go-back-n error recovery protocol and the sliding window flow control protocol a..
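As background for the two ARQ schemes contrasted above, the standard textbook approximations for expected transmissions per delivered packet under independent errors (error probability p, go-back window w) can be compared directly. These are the classical formulas from data-network texts, offered as a sanity-check sketch, not the paper's quasi-independence proof.

```python
def gbn_expected_tx(p, w):
    """Go-back-N: a loss costs retransmitting the lost packet plus the
    w-1 packets already in flight behind it (classic approximation)."""
    return (1 - p + w * p) / (1 - p)

def sr_expected_tx(p):
    """Selective repeat: only the lost packet itself is resent."""
    return 1 / (1 - p)

# At p = 0 both cost exactly one transmission per packet; for p > 0
# go-back-N pays a growing penalty proportional to the window size.
```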

    Traffic Signatures for Network Engineering

    Traffic over campus-level networks... The objective of this article is to briefly overview our talk and present it in the context of related results. It is organized as follows. Section 2 presents two traffic signature models: (i) a non-parametric trace-sampling model, and (ii) parametric time-series models (seasonal ARIMA models). Section 3 presents related work in the field. Section 4 discusses predictive control briefly. Section 5 presents some concluding remarks.

    On Resource Management and QoS Guarantees For Long Range Dependent Traffic

    It has been known for several years now that variable-bit-rate video sources are strongly auto-correlated. Recently, several studies have indicated that the resulting stochastic processes exhibit long-range dependence properties. This implies that large buffers at intermediate switching points may not provide adequate delay performance for such classes of traffic in broadband packet-switched networks (such as ATM). In this paper, we study the effect of long-memory processes on queue length statistics of a single-queue system through a controlled fractionally differenced ARIMA(1, d, 0) input process. This process has two parameters, φ1 (0 < φ1 < 1) and d (0 ≤ d < 1/2), representing an auto-regressive component and a long-range dependent component, respectively. Results show that the queue length statistics studied (mean, variance and the 0.999 quantile) are proportional to e^(c1·φ1)·e^(c2·d), where c1 and c2 are positive constants and c2 > c1. The effect of the auto-correlation..
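The reported empirical relation is a one-liner to evaluate. The proportionality constant and the values of c1 and c2 below are hypothetical; the sketch only illustrates that, with c2 > c1, an increment in the long-range-dependence parameter d inflates the queue statistic more than the same increment in the auto-regressive parameter φ1.

```python
import math

def queue_statistic(phi1, d, k=1.0, c1=1.0, c2=3.0):
    """Empirical form from the abstract: statistic ∝ e^(c1*phi1)·e^(c2*d).
    k, c1, c2 are hypothetical illustrative constants with c2 > c1."""
    return k * math.exp(c1 * phi1 + c2 * d)

base = queue_statistic(0.2, 0.2)
bump_phi = queue_statistic(0.3, 0.2)   # +0.1 in the AR parameter
bump_d = queue_statistic(0.2, 0.3)     # +0.1 in the LRD parameter
# The LRD bump dominates whenever c2 > c1.
```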

    Delay-jitter Bound and Statistical Loss Bound for Heterogeneous Correlated Traffic --- Architecture and Equivalent Bandwidth

    Network support for variable-bit-rate video needs to consider (i) properties of the induced workload (e.g., significant auto-correlations into far lags and different marginal distributions among connections) and (ii) application-specific bounds on delay-jitter and statistical cell-loss probabilities. The objective of this paper is to present a quality-of-service solution for such traffic at each multiplexing point in the network. Heterogeneity in both offered workload and quality-of-service needs is addressed. The paper has several parts: (i) motivation for the proposed architecture, including an empirical model for selected queue statistics when the input process is a fractionally-differenced ARIMA(1, d, 0) process and the server is work-conserving; (ii) a framing strategy with active cell-discard to address the impact of inter-frame dependencies that would otherwise increase queue lengths, while simultaneously providing delay-jitter bounds; (iii) a pseudo earliest-due-date cell dispatcher for resolving competing deadlines, addressing cell-loss bounds and fairness across virtual circuits, and maximizing output-channel efficiency; and (iv) upper bounds on the equivalent bandwidth needed for heterogeneous delay-jitter bounds and heterogeneous cell-loss probabilities for traffic with heterogeneous arrival statistics. This paper assimilates and builds on the results of a number of authors, notably those of Golestani (Stop-and-Go framing), Garrett and Willinger and Pancha and El Zarki (variable-bit-rate video traffic models and related multiplexor performance studies), and Yang and Pan (optimal space-conserving loss schedulers).
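The earliest-due-date dispatch in item (iii) can be sketched as a deadline-ordered priority queue. This is a generic EDD skeleton under assumed semantics (per-cell deadlines assigned upstream, FIFO tie-breaking); the paper's pseudo-EDD version additionally handles loss bounds and fairness across virtual circuits.

```python
import heapq

class PseudoEDD:
    """Minimal earliest-due-date cell dispatcher: always transmit the
    queued cell whose deadline expires soonest; ties break FIFO."""

    def __init__(self):
        self._heap = []
        self._seq = 0  # insertion counter for stable tie-breaking

    def enqueue(self, cell, deadline):
        heapq.heappush(self._heap, (deadline, self._seq, cell))
        self._seq += 1

    def dispatch(self):
        """Return the most urgent cell, or None when the queue is idle."""
        if not self._heap:
            return None
        return heapq.heappop(self._heap)[2]

# Usage: cells enqueued out of deadline order come out deadline-first.
q = PseudoEDD()
q.enqueue("a", deadline=3)
q.enqueue("b", deadline=1)
q.enqueue("c", deadline=2)
order = [q.dispatch(), q.dispatch(), q.dispatch()]
```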